Recently, image-wise implicit neural representations of videos (NeRV) have gained popularity for their promising results and swift speed compared with regular pixel-wise implicit representations. However, redundant parameters within the network structure cause a large model size when scaling up for desirable performance. The key reason behind this phenomenon is the coupled formulation of NeRV, which outputs the spatial and temporal information of video frames directly from the frame index input. In this paper, we propose E-NeRV, which dramatically expedites NeRV by decomposing the image-wise implicit neural representation into separate spatial and temporal contexts. Guided by this new formulation, our model greatly reduces redundant model parameters while retaining the representation ability. We experimentally find that our method improves performance with fewer parameters, yielding a more than $8\times$ faster convergence speed. Code is available at https://github.com/kyleleey/e-nerv.
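To make the decomposition concrete, here is a minimal sketch of a video INR split into separate temporal and spatial branches, in the spirit of E-NeRV. The layer sizes, the shared spatial grid, and the multiplicative fusion are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DecoupledVideoINR(nn.Module):
    def __init__(self, h=9, w=16, feat=64):
        super().__init__()
        # Temporal branch: frame index -> compact temporal embedding.
        self.temporal = nn.Sequential(
            nn.Linear(1, 128), nn.GELU(), nn.Linear(128, feat))
        # Spatial branch: one learned low-resolution feature grid shared by
        # all frames, instead of regenerating spatial content per frame.
        self.spatial = nn.Parameter(torch.randn(1, feat, h, w))
        # Decoder upsamples the fused features to an RGB frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat, 32, 4, stride=4), nn.GELU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, t):                          # t: (B, 1), normalized
        temb = self.temporal(t)[..., None, None]   # (B, feat, 1, 1)
        fused = self.spatial * temb                # modulate the shared grid
        return self.decoder(fused)                 # (B, 3, 4h, 4w)

frame = DecoupledVideoINR()(torch.tensor([[0.5]]))  # one frame at t = 0.5
```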
Recently, self-attention operators have shown superior performance as a stand-alone building block for vision models. However, existing self-attention models are often hand-designed, modified from CNNs, and obtained only by stacking a single operator. The broader architecture space that combines different self-attention operators with convolution has rarely been explored. In this paper, we explore this novel architecture space with a weight-sharing Neural Architecture Search (NAS) algorithm. The resulting architecture, named TrioNet, combines convolution, local self-attention, and global (axial) self-attention operators. To search this huge architecture space effectively, we propose hierarchical sampling for better training of the supernet. In addition, we propose a novel weight-sharing strategy, multi-head sharing, specifically for multi-head self-attention operators. Our searched TrioNet, which combines self-attention and convolution, outperforms all stand-alone models with fewer FLOPs on ImageNet classification, where self-attention performs better than convolution. Furthermore, on various small datasets we observe inferior performance from self-attention models, yet our TrioNet still matches the best operator in such cases, convolution. Our code is available at https://github.com/phj128/trionet.
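As a rough illustration of how a weight-sharing supernet samples among the three operator families discussed above, consider the following toy sketch. The operators, dimensions, and uniform sampling policy are simplified stand-ins; the paper's hierarchical sampling and multi-head sharing strategies are more involved.

```python
import random
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """One supernet layer holding all candidate operators at once."""
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleDict({
            "conv": nn.Conv1d(dim, dim, 3, padding=1),
            "local_attn": nn.MultiheadAttention(dim, 4, batch_first=True),
            "axial_attn": nn.MultiheadAttention(dim, 4, batch_first=True),
        })

    def forward(self, x, choice):                  # x: (B, L, dim)
        if choice == "conv":
            return self.ops["conv"](x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.ops[choice](x, x, x)         # attention stand-in
        return out

layers = nn.ModuleList([MixedLayer(32) for _ in range(4)])
x = torch.randn(2, 16, 32)
# Each supernet training step samples one architecture: one op per layer.
arch = [random.choice(["conv", "local_attn", "axial_attn"]) for _ in layers]
for layer, choice in zip(layers, arch):
    x = layer(x, choice)                           # only sampled ops update
```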
In this paper, we address the problem of monocular bokeh synthesis, where we attempt to render a shallow-depth-of-field image from a single all-in-focus image. Unlike DSLR cameras, mobile cameras cannot capture this effect directly, owing to the physical constraints of the mobile aperture. We therefore propose a network-based method capable of rendering realistic monocular bokeh from a single image input. To this end, we introduce three new edge-aware bokeh losses based on a predicted monocular depth map, which sharpen foreground edges while blurring the background. The model is then fine-tuned with an adversarial loss to produce a realistic bokeh effect. Experimental results show that our method generates a pleasing, natural bokeh effect with sharp edges while handling complex scenes.
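To illustrate the depth-guided edge-weighting idea, here is a hedged sketch of one plausible edge-aware loss: reconstruction error is up-weighted near depth discontinuities so that foreground edges stay sharp. The paper's three losses differ in detail; this only conveys the principle.

```python
import torch
import torch.nn.functional as F

def edge_aware_bokeh_loss(pred, target, depth, edge_weight=5.0):
    # pred, target: (B, 3, H, W) rendered / ground-truth bokeh images
    # depth: (B, 1, H, W) predicted monocular depth map
    dx = depth[..., :, 1:] - depth[..., :, :-1]      # horizontal gradients
    dy = depth[..., 1:, :] - depth[..., :-1, :]      # vertical gradients
    edges = F.pad(dx.abs(), (0, 1)) + F.pad(dy.abs(), (0, 0, 0, 1))
    # Up-weight pixels near depth discontinuities (object boundaries).
    weight = 1.0 + edge_weight * (edges / (edges.amax() + 1e-8))
    return (weight * (pred - target).abs()).mean()
```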
Indirect time-of-flight (iToF) cameras are a promising depth-sensing technology. However, they are prone to errors caused by multi-path interference (MPI) and low signal-to-noise ratio (SNR). Traditional methods, after denoising, mitigate MPI by estimating a transient image that encodes depths. Recently, data-driven methods that jointly denoise and mitigate MPI, without using an intermediate transient representation, have become the state of the art. In this paper, we propose to revisit the transient representation. Using data-driven priors, we interpolate/extrapolate iToF frequencies and use them to estimate the transient image. Since direct ToF (dToF) sensors capture transient images, we name our method iToF2dToF. The transient representation is flexible: it can be integrated with rule-based depth-sensing algorithms, is robust to low SNR, and can handle ambiguous scenarios that arise in practice (e.g., specular MPI, optical cross-talk). We demonstrate the benefits of iToF2dToF over prior methods in real depth-sensing scenarios.
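The underlying signal model can be illustrated numerically: correlation measurements at iToF modulation frequencies are (approximately) Fourier samples of the scene's transient response, so a densely interpolated/extrapolated spectrum can be inverted back to a transient image. In this toy sketch the learned frequency prior is replaced by exact synthetic values.

```python
import numpy as np

n_bins = 64                               # transient time bins (dToF-like)
transient = np.zeros(n_bins)
transient[10] = 1.0                       # direct bounce
transient[25] = 0.4                       # weaker multi-path return

# Correlation measurements at all harmonics = DFT of the transient; a dense
# frequency set is what the learned interpolation/extrapolation targets.
freqs = np.fft.rfft(transient)
recovered = np.fft.irfft(freqs, n=n_bins)
assert np.allclose(recovered, transient)  # dense spectrum -> exact transient
```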
While humans can effortlessly transform complex visual scenes into simple words, and vice versa, by leveraging a high-level understanding of the content, conventional and even recent learned image compression codecs do not appear to exploit the semantic meaning of visual content to its full potential. Moreover, they focus mainly on rate-distortion, perform poorly on perceptual quality (especially in low-bitrate regimes), and often disregard the performance of downstream computer vision algorithms, a fast-growing consumer group of compressed images. In this paper, we (1) propose a generic framework that enables any image codec to leverage high-level semantics and (2) study the joint optimization of perceptual quality and distortion. Our idea is that, given any codec, we use high-level semantics to augment the low-level visual features it extracts, producing an essentially new, semantic-aware codec. We propose a three-phase training scheme that teaches the semantic-aware codec to exploit the power of semantics to jointly optimize rate-perception-distortion (R-PD) performance. As an added benefit, the semantic-aware codec also improves the performance of downstream computer vision algorithms. To validate our claims, we conduct extensive empirical evaluations and provide quantitative and qualitative results.
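One way to picture the "augment low-level features with semantics" idea is a small fusion module that injects semantic features into a base codec's latents. The module below is purely illustrative; the channel sizes and residual fusion design are assumptions, and the paper's framework is codec-agnostic rather than tied to any specific module.

```python
import torch
import torch.nn as nn

class SemanticFusion(nn.Module):
    def __init__(self, codec_ch=192, sem_ch=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(codec_ch + sem_ch, codec_ch, 1), nn.GELU(),
            nn.Conv2d(codec_ch, codec_ch, 3, padding=1))

    def forward(self, codec_feat, sem_feat):
        # codec_feat: low-level latents from any base codec
        # sem_feat:   high-level semantic features at the same resolution
        return codec_feat + self.fuse(torch.cat([codec_feat, sem_feat], 1))

fused = SemanticFusion()(torch.randn(1, 192, 16, 16),
                         torch.randn(1, 64, 16, 16))
```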
Despite its importance for federated learning, continuous learning and many other applications, on-device training remains an open problem for EdgeAI. The problem stems from the large number of operations (e.g., floating point multiplications and additions) and memory consumption required during training by the back-propagation algorithm. Consequently, in this paper, we propose a new gradient filtering approach that enables on-device DNN model training. More precisely, our approach creates a special structure with fewer unique elements in the gradient map, thus significantly reducing the computational complexity and memory consumption of back-propagation during training. Extensive experiments on image classification and semantic segmentation with multiple DNN models (e.g., MobileNet, DeepLabV3, UPerNet) and devices (e.g., Raspberry Pi and Jetson Nano) demonstrate the effectiveness and wide applicability of our approach. For example, compared to SOTA, we achieve up to 19$\times$ speedup and 77.1% memory savings on ImageNet classification with only 0.1% accuracy loss. Finally, our method is easy to implement and deploy; over 20$\times$ speedup and 90% energy savings have been observed compared to highly optimized baselines in MKLDNN and CUDNN on NVIDIA Jetson Nano. Consequently, our approach opens up a new direction of research with a huge potential for on-device training.
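The core idea, a gradient map with fewer unique elements, can be sketched as a patch-wise constant approximation of the gradient. The average-pooling realization and patch size below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def filter_gradient(grad, r=4):
    # grad: (B, C, H, W) gradient w.r.t. an activation map
    pooled = F.avg_pool2d(grad, r)                  # one value per r x r patch
    return F.interpolate(pooled, scale_factor=r)    # broadcast it back

g = torch.randn(1, 8, 16, 16)
g_filt = filter_gradient(g)
# Far fewer unique elements => cheaper backward convolutions.
print(torch.unique(g_filt).numel(), "unique elements vs", g.numel())
```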
Cybercriminals are moving towards zero-day attacks affecting resource-constrained devices such as single-board computers (SBCs). Assuming that perfect security is unrealistic, Moving Target Defense (MTD) is a promising approach that mitigates attacks by dynamically altering target attack surfaces. Still, selecting suitable MTD techniques for zero-day attacks remains an open challenge. Reinforcement Learning (RL) could be an effective approach to optimizing MTD selection through trial and error, but the literature falls short when it comes to i) evaluating the performance of RL and MTD solutions in real-world scenarios, ii) studying whether behavioral fingerprinting is suitable for representing SBCs' states, and iii) calculating resource consumption on SBCs. To address these limitations, this work proposes an online RL-based framework that learns the correct MTD mechanisms for mitigating heterogeneous zero-day attacks on SBCs. The framework uses behavioral fingerprinting to represent SBCs' states and RL to learn the MTD techniques that mitigate each malicious state. It has been deployed in a real IoT crowdsensing scenario with a Raspberry Pi acting as a spectrum sensor. In more detail, the Raspberry Pi was infected with different samples of command-and-control malware, rootkits, and ransomware, with the framework then selecting among four existing MTD techniques. A set of experiments demonstrated the suitability of the framework for learning proper MTD techniques that mitigate all attacks (except one harmful rootkit) while consuming <1 MB of storage and utilizing <55% CPU and <80% RAM.
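A minimal tabular Q-learning loop captures the selection problem: states are labels derived from behavioral fingerprints and actions are the four MTD techniques. The environment below is a stub with a made-up reward rule; the framework itself runs against a real infected Raspberry Pi rather than a simulator.

```python
import random
from collections import defaultdict

STATES = ["benign", "c2", "rootkit", "ransomware"]
MTD_ACTIONS = range(4)                 # the four candidate MTD techniques
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.2

def env_step(state, action):
    """Stub: reward if the (hypothetical) right MTD mitigates the state."""
    mitigated = (action == STATES.index(state) % 4)
    return (1.0 if mitigated else -0.1), ("benign" if mitigated else state)

state = random.choice(STATES)
for _ in range(5000):
    a = (random.choice(list(MTD_ACTIONS)) if random.random() < eps
         else max(MTD_ACTIONS, key=lambda x: Q[(state, x)]))
    r, nxt = env_step(state, a)
    best_next = max(Q[(nxt, x)] for x in MTD_ACTIONS)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt if nxt != "benign" else random.choice(STATES)
```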
Model Predictive Controllers (MPCs) are widely used for controlling cyber-physical systems. MPC is an iterative process of optimizing the predicted future states of a robot over a fixed time horizon. MPCs are effective in practice, but their computational expense and slowness make them poorly suited to real-time applications. This flaw can be overcome by approximating an MPC's functionality. Neural networks are very good function approximators and are much faster than an MPC. Applying neural networks to control-based applications can be challenging, however, because the data violate the i.i.d. assumption. This study investigates various imitation learning methods for using a neural network in a control-based environment and evaluates their benefits and shortcomings.
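One of the imitation-learning methods such a study would compare is DAgger, which addresses exactly the i.i.d. violation mentioned above: the learner's own rollouts are labeled by the expert MPC and aggregated into the training set. The scalar dynamics, the stand-in expert, and the least-squares "network" below are toy assumptions.

```python
import numpy as np

def expert_mpc(x):                    # stand-in for a real MPC solver
    return -0.5 * x                   # e.g., a stabilizing feedback law

def fit(dataset):                     # stand-in for network training:
    X, U = map(np.array, zip(*dataset))          # a least-squares policy
    k = np.linalg.lstsq(X[:, None], U, rcond=None)[0][0]
    return lambda x: k * x

data = []
policy = lambda x: 0.0                # untrained initial policy
for _ in range(5):                    # DAgger iterations
    x = 1.0
    for _ in range(20):               # roll out the *learner's* policy...
        data.append((x, expert_mpc(x)))   # ...but label with the expert
        x = x + 0.1 * policy(x)       # toy scalar dynamics x' = x + 0.1 u
    policy = fit(data)                # retrain on the aggregated dataset
```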
The robustness of Text-to-SQL parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications. Previous studies along this line primarily focused on perturbations in the natural language question side, neglecting the variability of tables. Motivated by this, we propose the Adversarial Table Perturbation (ATP) as a new attacking paradigm to measure the robustness of Text-to-SQL models. Following this proposition, we curate ADVETA, the first robustness evaluation benchmark featuring natural and realistic ATPs. All tested state-of-the-art models experience dramatic performance drops on ADVETA, revealing models' vulnerability in real-world practices. To defend against ATP, we build a systematic adversarial training example generation framework tailored for better contextualization of tabular data. Experiments show that our approach not only brings the best robustness improvement against table-side perturbations but also substantially empowers models against NL-side perturbations. We release our benchmark and code at: https://github.com/microsoft/ContextualSP.
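For intuition, the two table-side perturbations ADVETA builds on, replacing a column name with a misleading alias (RPL) and adding a semantically close distractor column (ADD), can be mimicked as follows. The example words here are made up; ADVETA's perturbations are human-curated for naturalness.

```python
schema = {"singer": ["name", "age", "country"]}

def perturb_replace(schema, table, col, alias):
    cols = schema[table]
    cols[cols.index(col)] = alias          # RPL: swap in a tricky synonym
    return schema

def perturb_add(schema, table, distractor):
    schema[table].append(distractor)       # ADD: insert a distractor column
    return schema

perturb_replace(schema, "singer", "country", "nation")
perturb_add(schema, "singer", "homeland")  # competes with "nation"
print(schema)  # {'singer': ['name', 'age', 'nation', 'homeland']}
```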
A practical issue for edge AI systems is that the data distributions of the training dataset and the deployed environment may differ due to noise and environmental changes over time. This phenomenon is known as concept drift, and the gap degrades the performance of edge AI systems and may cause system failures. To address this gap, retraining neural network models when concept drift is detected is a practical approach. However, since the available compute resources of edge devices are strictly limited, in this paper we propose a lightweight concept drift detection method that works in cooperation with a recently proposed on-device learning technique for neural networks. In this setting, both the neural network retraining and the proposed concept drift detection rely on sequential computation only, reducing computation cost and memory utilization. Evaluation results show that while accuracy decreases by 3.8%-4.3% compared with existing batch-based detection methods, our approach reduces memory size by 88.9%-96.4% and execution time by 1.3%-83.8%. As a result, the combination of neural network retraining and the proposed concept drift detection method is demonstrated on a Raspberry Pi Pico, which has only 264 kB of memory.
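As a flavor of what a constant-memory, sequential drift check can look like, here is a minimal sketch that tracks the running error statistic with Welford updates and flags drift when a sample leaves a k-sigma band. The thresholding rule is an assumption, not the paper's exact detector; on-device retraining would be triggered whenever `update()` returns True.

```python
class SequentialDriftDetector:
    """Constant-memory drift check: Welford running mean/variance of the
    per-sample error, flagging samples outside a k-sigma band."""
    def __init__(self, k=3.0, warmup=30):
        self.k, self.warmup = k, warmup
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, error):
        self.n += 1
        delta = error - self.mean
        self.mean += delta / self.n           # O(1) sequential updates only
        self.m2 += delta * (error - self.mean)
        if self.n < self.warmup:
            return False                      # still estimating statistics
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(error - self.mean) > self.k * max(std, 1e-8)

# Hypothetical usage: feed per-sample losses; retrain when drift is flagged.
detector = SequentialDriftDetector()
for loss in [0.1] * 50 + [0.9]:
    if detector.update(loss):
        print("drift detected -> trigger on-device retraining")
```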